Flattening Layer Pruning in Convolutional Neural Networks

Authors

Abstract

The rapid growth of performance in the field of neural networks has also increased their sizes. Pruning methods are getting more and more attention in order to overcome the problem of non-impactful parameters and the overgrowth of neurons. In this article, the application of Global Sensitivity Analysis (GSA) demonstrates the impact of input variables on the model's output variables. GSA gives the ability to mark out the least meaningful arguments and to build reduction algorithms on these. Using several popular datasets, the study shows how different levels of pruning correlate with network accuracy, and that substantial reduction affects accuracy only negligibly. In doing so, pre- and post-reduction network sizes are compared. This paper shows that the Sobol and FAST methods combined with common norms can largely decrease the size of a network while keeping its accuracy relatively high. On the basis of the obtained results, it is possible to create a thesis about the asymmetry between the elements removed from the network topology and the quality of the network.
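To make the GSA-based procedure concrete, the sketch below treats each flattening-layer feature as an input variable to the dense classifier head, estimates Sobol total-order indices with the SALib library, and keeps only the most sensitive features. This is a minimal sketch under stated assumptions: the `head` module, the [0, 1] bounds on normalized activations, the scalarization of the logits, and the keep ratio are illustrative choices, not the paper's exact pipeline.

```python
# Minimal sketch: rank flattening-layer features by Sobol total-order
# sensitivity and keep only the most influential ones. Assumes the CNN is
# split into a conv feature extractor and a dense `head`; bounds, sample
# count, and keep ratio are illustrative, not the paper's settings.
import numpy as np
import torch
from SALib.sample import saltelli
from SALib.analyze import sobol

def sobol_keep_mask(head, n_features, keep_ratio=0.8, n_samples=256):
    """Boolean mask over flattened features: True = keep."""
    problem = {
        "num_vars": n_features,
        "names": [f"x{i}" for i in range(n_features)],
        "bounds": [[0.0, 1.0]] * n_features,   # assumes normalized activations
    }
    # Saltelli sample size grows with n_features; acceptable for a sketch.
    X = saltelli.sample(problem, n_samples, calc_second_order=False)
    with torch.no_grad():
        logits = head(torch.tensor(X, dtype=torch.float32))
        y = logits.max(dim=1).values.numpy()   # scalarize: top-class logit
    st = sobol.analyze(problem, y, calc_second_order=False)["ST"]
    keep = np.argsort(st)[-int(keep_ratio * n_features):]
    mask = np.zeros(n_features, dtype=bool)
    mask[keep] = True
    return mask
```

Pruning then amounts to deleting the masked-out columns of the first dense layer (and the conv channels that produce them), followed by fine-tuning; a FAST variant would swap in SALib's `fast_sampler.sample` and `fast.analyze` with the same problem definition.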


Similar Articles

Compact Deep Convolutional Neural Networks With Coarse Pruning

The learning capability of a neural network improves with increasing depth at higher computational costs. Wider layers with dense kernel connectivity patterns further increase this cost and may hinder real-time inference. We propose feature map and kernel level pruning for reducing the computational complexity of a deep convolutional neural network. Pruning feature maps reduces the width of a l...
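One common way to realize feature-map pruning at this granularity is to score each output channel by the L1 norm of its kernel and rebuild the layer without the weakest channels. The PyTorch sketch below assumes that criterion and assumes `conv` feeds `next_conv` directly (no batch norm in between), since the truncated abstract does not fix a selection rule.

```python
# Sketch of feature-map (channel) pruning by L1 weight norm; the criterion
# and the direct conv->conv wiring are assumptions for illustration.
import torch
import torch.nn as nn

def prune_feature_maps(conv: nn.Conv2d, next_conv: nn.Conv2d, n_prune: int):
    """Drop the n_prune weakest output channels of `conv` and the matching
    input channels of `next_conv`."""
    l1 = conv.weight.detach().abs().sum(dim=(1, 2, 3))      # score per feature map
    keep = torch.argsort(l1, descending=True)[: conv.out_channels - n_prune]
    keep, _ = torch.sort(keep)

    new_conv = nn.Conv2d(conv.in_channels, len(keep), conv.kernel_size,
                         conv.stride, conv.padding, bias=conv.bias is not None)
    new_conv.weight.data = conv.weight.data[keep].clone()
    if conv.bias is not None:
        new_conv.bias.data = conv.bias.data[keep].clone()

    new_next = nn.Conv2d(len(keep), next_conv.out_channels,
                         next_conv.kernel_size, next_conv.stride,
                         next_conv.padding, bias=next_conv.bias is not None)
    new_next.weight.data = next_conv.weight.data[:, keep].clone()
    if next_conv.bias is not None:
        new_next.bias.data = next_conv.bias.data.clone()
    return new_conv, new_next
```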


Coarse Pruning of Convolutional Neural Networks with Random Masks

The learning capability of a neural network improves with increasing depth at higher computational costs. Wider layers with dense kernel connectivity patterns further increase this cost and may hinder real-time inference. We propose feature map and kernel pruning for reducing the computational complexity of a deep convolutional neural network. Due to their coarse nature, these pruning granularities c...
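A minimal sketch of random masking at feature-map granularity, assuming the mask is applied through a forward hook; the drop ratio and the hook mechanism are illustrative, as the truncated abstract gives no implementation details.

```python
# Sketch: zero a fixed random subset of a conv layer's feature maps on
# every forward pass; ratio and hook mechanics are assumed for illustration.
import torch
import torch.nn as nn

def attach_random_channel_mask(conv: nn.Conv2d, drop_ratio=0.25, seed=0):
    g = torch.Generator().manual_seed(seed)
    mask = (torch.rand(conv.out_channels, generator=g) >= drop_ratio).float()

    def hook(module, inputs, output):
        # Broadcast the per-channel mask over (batch, channel, H, W).
        return output * mask.to(output.device).view(1, -1, 1, 1)

    return conv.register_forward_hook(hook)  # handle.remove() restores the layer
```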


Pruning Convolutional Neural Networks for Resource Efficient Transfer Learning

We propose a new framework for pruning convolutional kernels in neural networks to enable efficient inference, focusing on transfer learning where large and potentially unwieldy pretrained networks are adapted to specialized tasks. We interleave greedy criteria-based pruning with fine-tuning by backpropagation—a computationally efficient procedure that maintains good generalization in the prune...
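The interleaving described above might be organized as in the loop below; `criterion_scores` and `remove_channel` are hypothetical placeholders for the paper's importance criterion and network-surgery step, so this shows only the control flow, not the actual method.

```python
# Control-flow sketch: alternate greedy channel removal with short
# fine-tuning phases. `criterion_scores` and `remove_channel` are
# hypothetical placeholders, not an API from the cited paper.
def prune_then_finetune(model, train_loader, optimizer, loss_fn,
                        criterion_scores, remove_channel,
                        rounds=10, per_round=1, finetune_steps=100):
    for _ in range(rounds):
        scores = criterion_scores(model, train_loader)  # {channel: importance}
        for ch in sorted(scores, key=scores.get)[:per_round]:
            remove_channel(model, ch)                   # greedily drop least important
        for _, (x, y) in zip(range(finetune_steps), train_loader):
            optimizer.zero_grad()
            loss_fn(model(x), y).backward()             # recover lost accuracy
            optimizer.step()
    return model
```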


Pruning Convolutional Neural Networks for Image Instance Retrieval

In this work, we focus on the problem of image instance retrieval with deep descriptors extracted from pruned Convolutional Neural Networks (CNN). The objective is to heavily prune convolutional edges while maintaining retrieval performance. To this end, we introduce both data-independent and data-dependent heuristics to prune convolutional edges, and evaluate their performance across various c...
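The distinction between the two heuristic families is easy to state in code: a data-independent score uses only the weights, while a data-dependent score needs a probe batch of images. The two scoring rules below (weight magnitude and mean absolute activation) are common choices assumed for illustration, not necessarily the cited paper's heuristics.

```python
# Two edge/channel scoring styles; both rules are assumptions, chosen as
# common representatives of each family.
import torch
import torch.nn as nn

def weight_magnitude_scores(conv: nn.Conv2d) -> torch.Tensor:
    # Data-independent: one score per (out, in) kernel edge, no images needed.
    return conv.weight.detach().abs().sum(dim=(2, 3))        # (out_ch, in_ch)

def activation_scores(conv: nn.Conv2d, probe: torch.Tensor) -> torch.Tensor:
    # Data-dependent: mean |response| of each output map on probe images.
    with torch.no_grad():
        return conv(probe).abs().mean(dim=(0, 2, 3))         # (out_ch,)
```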


Pruning Convolutional Neural Networks for Resource Efficient Inference

We propose a new formulation for pruning convolutional kernels in neural networks to enable efficient inference. We interleave greedy criteria-based pruning with finetuning by backpropagation—a computationally efficient procedure that maintains good generalization in the pruned network. We propose a new criterion based on Taylor expansion that approximates the change in the cost function induce...
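The Taylor criterion scores a feature map by a first-order estimate of the loss change its removal would cause, i.e. |activation * gradient| averaged over the batch. A minimal PyTorch sketch follows, with the hook placement and the averaging over spatial dimensions as assumptions about details the abstract leaves open.

```python
# Sketch of a first-order Taylor importance score: |activation * gradient|
# per feature map, averaged over batch and spatial dims.
import torch
import torch.nn as nn
import torch.nn.functional as F

def taylor_channel_importance(model: nn.Module, conv: nn.Conv2d,
                              x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    saved = {}
    def hook(module, inputs, output):
        output.retain_grad()            # keep the gradient of this non-leaf tensor
        saved["act"] = output
    handle = conv.register_forward_hook(hook)
    F.cross_entropy(model(x), y).backward()
    handle.remove()
    a = saved["act"]
    return (a * a.grad).abs().mean(dim=(0, 2, 3))   # one score per feature map
```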



Journal

Journal Title: Symmetry

Year: 2021

ISSN: 0865-4824, 2226-1877

DOI: https://doi.org/10.3390/sym13071147